Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the benchmarking capabilities by integrating the tau2 benchmark.
Code Review
This pull request introduces support for the tau2 benchmark by adding a new custom task, updating worker logic to accommodate custom tasks and skippable steps, and relaxing some configuration requirements. The overall structure of the changes is sound. However, I have identified several critical issues that need to be addressed. These include potential `NameError` exceptions in the new task due to incorrect variable scoping, and `KeyError` exceptions from unsafe dictionary access in the worker classes. Additionally, there is a bug in the new tau2 benchmark configuration file, and the new task employs module-level monkey-patching, which is a risky practice. I have provided specific comments and suggestions for fixing these issues.
```python
total_tasks = self._get_task_count(self.run_config) * self.run_config.num_trials
save_to = f"{self.run_config.save_to}.json"
pbar = tqdm(total=total_tasks, desc="Running TAU2 Bench", unit="task")
task_state_manager.update_task_state(
```
The `task_state_manager` variable is used here but it is not defined in the current scope. It seems you intended to use `self.task_state_manager`, which was assigned in the `run` method. This will cause a `NameError` at runtime.
Suggested change:

```diff
- task_state_manager.update_task_state(
+ self.task_state_manager.update_task_state(
```
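The scoping issue the reviewer describes can be reproduced in isolation. Below is a minimal, self-contained sketch (the class and attribute names are illustrative, not the PR's actual code): a bare name only resolves against local, enclosing, global, and builtin scope, so an attribute assigned via `self.` in one method is invisible as a bare name in another.

```python
class Worker:
    def run(self):
        # The attribute lives on the instance, not in any function's scope.
        self.task_state_manager = {"state": "running"}
        return self.step()

    def step(self):
        try:
            # Bare name lookup skips instance attributes entirely,
            # so this raises NameError at runtime.
            return task_state_manager["state"]
        except NameError:
            # Qualifying with self. resolves the attribute correctly.
            return self.task_state_manager["state"]

print(Worker().run())  # prints "running" after hitting the NameError branch
```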
```python
new_completed = len(data.get('simulations', []))
if new_completed > completed:
    pbar.update(new_completed - completed)
    task_state_manager.update_task_state(
```
```python
    monitor_thread.join()
finally:
    pbar.update(total_tasks - pbar.n)
    task_state_manager.update_task_state(
```
```python
custom_infer = cfg.get("infer")
custom_task = None
if custom_infer:
    custom_task = custom_infer["runner"]["task"].get("type")
```
Accessing nested dictionary keys directly without checking for their existence can lead to a `KeyError`. `custom_infer['runner']` or `custom_infer['runner']['task']` could fail if these keys are not present in the configuration. You should use `.get()` with default values for safer access.
Suggested change:

```diff
- custom_task = custom_infer["runner"]["task"].get("type")
+ custom_task = custom_infer.get("runner", {}).get("task", {}).get("type")
```
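For illustration, here is a minimal sketch of the difference (the config dict below is hypothetical): direct indexing raises `KeyError` the moment a level is missing, while chained `.get()` calls with `{}` defaults degrade gracefully to `None`.

```python
cfg = {"infer": {"runner": {}}}  # hypothetical config with no "task" section

custom_infer = cfg.get("infer")

# Unsafe: direct indexing raises KeyError on the missing "task" key.
try:
    custom_task = custom_infer["runner"]["task"].get("type")
except KeyError:
    custom_task = "crashed"

# Safe: each missing level falls back to {}, and the final .get() yields None.
safe_task = custom_infer.get("runner", {}).get("task", {}).get("type")

print(custom_task, safe_task)  # crashed None
```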
```python
custom_eval = cfg.get("eval")
custom_task = None
if custom_eval:
    custom_task = custom_eval["runner"]["task"].get("type")
```
Similar to the `Infer` class, accessing nested dictionary keys directly can cause a `KeyError` if the `runner` or `task` keys are missing in the eval configuration. Please use `.get()` for safe access to prevent potential crashes.
Suggested change:

```diff
- custom_task = custom_eval["runner"]["task"].get("type")
+ custom_task = custom_eval.get("runner", {}).get("task", {}).get("type")
```
```python
# ================= Replace litellm's cost-calculation function =================
import litellm
import logging

litellm_logger = logging.getLogger("litellm")
litellm_logger.setLevel(logging.CRITICAL)

try:
    from litellm.utils import get_response_cost as litellm_get_response_cost
except ImportError:
    try:
        from litellm.cost_calculator import get_response_cost as litellm_get_response_cost
    except ImportError:
        litellm_get_response_cost = None

def patched_get_response_cost(*args, **kwargs):
    if litellm_get_response_cost is None:
        return 0.0
    try:
        return litellm_get_response_cost(*args, **kwargs)
    except Exception as e:
        if "This model isn't mapped yet" in str(e):
            return 0.0
        raise e

try:
    litellm.utils.get_response_cost = patched_get_response_cost
except AttributeError:
    pass
try:
    litellm.cost_calculator.get_response_cost = patched_get_response_cost
except AttributeError:
    pass
# ================= Replace litellm's cost-calculation function =================

DEFAULT_FAKE_API_KEY = "fake_api_key"

from tau2.data_model.simulation import RunConfig
from tau2.run import run_domain, get_tasks
from tau2.metrics.agent_metrics import compute_metrics

# ================= Replace tau2's cost-calculation function =================
import tau2.utils.llm_utils as tau2_llm_utils
import loguru

_original_tau2_get_response_cost = tau2_llm_utils.get_response_cost
_original_tau2_logger_error = tau2_llm_utils.logger.error

def _patched_logger_error(message, *args, **kwargs):
    if "This model isn't mapped yet" in str(message):
        return
    _original_tau2_logger_error(message, *args, **kwargs)

tau2_llm_utils.logger.error = _patched_logger_error
# ================= Replace tau2's cost-calculation function =================
```
Monkey-patching libraries at the module level is a risky practice. It globally alters the behavior of `litellm` and `tau2` and can lead to unexpected side effects, making the code harder to debug and maintain. Consider if there's a way to apply these patches more locally, for example using a context manager around the calls to `run_domain`. If this global patch is absolutely necessary, it should be accompanied by detailed comments explaining why it's needed and what risks are involved.
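One way to localize such a patch, along the lines the review suggests, is a context manager that swaps the function in and restores the original on exit. The sketch below is an assumption about how this could look, using a stand-in module object rather than the real `litellm` or `tau2` modules:

```python
from contextlib import contextmanager

class cost_module:
    """Stand-in for litellm.utils / tau2.utils.llm_utils (hypothetical)."""
    @staticmethod
    def get_response_cost(response):
        raise Exception("This model isn't mapped yet")

@contextmanager
def patched_response_cost(module):
    # Save the original so the patch cannot leak past the with-block.
    original = module.get_response_cost

    def patched(*args, **kwargs):
        try:
            return original(*args, **kwargs)
        except Exception as e:
            if "This model isn't mapped yet" in str(e):
                return 0.0  # treat unmapped models as zero cost
            raise

    module.get_response_cost = patched
    try:
        yield
    finally:
        module.get_response_cost = original  # always restore

with patched_response_cost(cost_module):
    print(cost_module.get_response_cost(None))  # 0.0
```

Outside the `with` block the original (raising) behavior is back, so other users of the library are unaffected.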
```python
dict(
    abbr=f'tau2_bench_{task}',
    args = dict(
        domain = "airline",  # -d, the simulation domain to run; valid values: get_options().domains ["airline", "retail", "telecom"]
```
The domain is hardcoded to `"airline"` inside a loop that iterates over `sub_tasks`. This means that all generated dataset configurations (`tau2_bench_airline`, `tau2_bench_retail`, `tau2_bench_telecom`) will incorrectly point to the `airline` domain. You probably intended to use the loop variable `task` for the domain.
Suggested change:

```diff
- domain = "airline",  # -d, the simulation domain to run; valid values: get_options().domains ["airline", "retail", "telecom"]
+ domain = task,  # -d, the simulation domain to run; valid values: get_options().domains ["airline", "retail", "telecom"]
```
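The effect of the fix can be sketched with a stripped-down version of the loop (the list and dict shapes here are illustrative, not the PR's exact config):

```python
sub_tasks = ["airline", "retail", "telecom"]

# Buggy: the domain is hardcoded, so every config targets "airline".
buggy = [dict(abbr=f"tau2_bench_{task}", domain="airline") for task in sub_tasks]

# Fixed: the loop variable gives each config its own domain.
fixed = [dict(abbr=f"tau2_bench_{task}", domain=task) for task in sub_tasks]

print([c["domain"] for c in buggy])  # ['airline', 'airline', 'airline']
print([c["domain"] for c in fixed])  # ['airline', 'retail', 'telecom']
```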
```python
if obj is None:
    return None
```
While adding a `None` check is a good improvement for robustness, the function's return type hint `-> str` on line 18 is now incorrect because the function can return `None`. Please update the signature to `-> Optional[str]` to accurately reflect its behavior. You will also need to add `from typing import Optional` at the top of the file.
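A sketch of the corrected signature (the function name and body here are hypothetical stand-ins, since the review excerpt shows only the added guard clause):

```python
from typing import Optional

def config_type_name(obj) -> Optional[str]:
    """Return a type name for obj, or None when no object is given."""
    if obj is None:
        return None  # the guard added in the PR makes None a valid return value
    return type(obj).__name__

print(config_type_name(None), config_type_name({}))  # None dict
```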
```python
new_cfg = dict(
    infer=dict(
        partitioner=dict(type=get_config_type(NaivePartitioner)),
```
Thanks for your contribution; we appreciate it a lot. The following guidelines will help keep your pull request in good shape and make it easier to get feedback. If you do not understand some items, don't worry, just open the pull request and ask the maintainers for help.
PR Type
Related Issue
Fixes #(issue ID) / Relates to #(issue ID)
🔍 Motivation
Please describe the motivation of this PR and the goal you want to achieve through it.
📝 Modification
Please briefly describe the modifications made in this PR.
📐 Associated Test Results
Please provide links to the related test results, such as CI pipelines and test reports.
Does the modification break backward compatibility for downstream repositories? If so, please describe how it breaks compatibility and how downstream projects should modify their code to stay compatible with this PR.
If the modification introduces performance degradation, please describe its impact and the expected performance improvement.
🌟 Use cases (Optional)
If this PR introduces a new feature, please list some use cases here and update the documentation.
✅ Checklist
Before PR:
After PR:
👥 Collaboration Info
🌟 Useful CI Commands
/gemini review
/gemini summary
/gemini help
/readthedocs build